In [ ]:
# You may need to restart your notebook kernel after updating the kfp sdk
!python3 -m pip install kfp --upgrade --user
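After restarting the kernel, you can optionally confirm which version of the kfp SDK is installed:
In [ ]:
# Optional: verify the installed kfp SDK version after restarting the kernel.
!python3 -m pip show kfp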
In [ ]:
EXPERIMENT_NAME = 'Simple notebook pipeline' # Name of the experiment in the UI
BASE_IMAGE = 'tensorflow/tensorflow:2.0.0b0-py3' # Base image used for components in the pipeline
In [ ]:
import kfp
import kfp.dsl as dsl
from kfp import compiler
from kfp import components
In [ ]:
@dsl.python_component(
    name='add_op',
    description='adds two numbers',
    base_image=BASE_IMAGE  # you can define the base image here, or when you build in the next step.
)
def add(a: float, b: float) -> float:
    '''Calculates sum of two arguments'''
    print(a, '+', b, '=', a + b)
    return a + b
In [ ]:
# Convert the function to a pipeline operation.
add_op = components.func_to_container_op(
    add,
    base_image=BASE_IMAGE,
)
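Because add is still an ordinary Python function, you can optionally sanity-check it locally before using the containerized add_op in a pipeline:
In [ ]:
# Optional sanity check: the original Python function still runs locally.
add(3.5, 4.5)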
In [ ]:
@dsl.pipeline(
    name='Calculation pipeline',
    description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
    a: float = 0,
    b: float = 7
):
    # Passing a pipeline parameter and a constant value as operation arguments.
    add_task = add_op(a, 4)  # Returns a dsl.ContainerOp class instance.
    # You can create an explicit dependency between tasks using xyz_task.after(abc_task);
    # see the sketch after this cell.
    add_2_task = add_op(a, b)
    add_3_task = add_op(add_task.output, add_2_task.output)
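As a sketch of the explicit-dependency comment above (not used in the rest of this tutorial), .after() forces one task to run after another even when no data is passed between them:
In [ ]:
# Illustrative sketch only: an explicit execution-order dependency between tasks.
@dsl.pipeline(
    name='Ordered calculation pipeline',
    description='Demonstrates an explicit task dependency with .after().'
)
def ordered_calc_pipeline(a: float = 0, b: float = 7):
    first_task = add_op(a, 4)
    second_task = add_op(a, b)
    # Run second_task only after first_task completes, even though no output is passed.
    second_task.after(first_task)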
Kubeflow Pipelines lets you group pipeline runs by Experiments. You can create a new experiment, or call kfp.Client().list_experiments() to see existing ones. If you don't specify the experiment name, the Default experiment will be used.
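For example, you can list the existing experiments like this (assuming the client can reach your Kubeflow Pipelines endpoint):
In [ ]:
# Optional: list experiments that already exist in your Kubeflow Pipelines deployment.
print(kfp.Client().list_experiments())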
You can directly run a pipeline given its function definition:
In [ ]:
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
# Launch a pipeline run given the pipeline function definition
kfp.Client().create_run_from_pipeline_func(calc_pipeline,
                                           arguments=arguments,
                                           experiment_name=EXPERIMENT_NAME)
# The generated links below lead to the Experiment page and the pipeline run details page, respectively
Alternatively, you can compile the pipeline separately, then upload and run it as follows:
In [ ]:
# Compile the pipeline
pipeline_func = calc_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
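The compiler writes the pipeline package to the notebook's working directory; you can optionally confirm that the file was produced:
In [ ]:
# Optional: confirm that the compiled pipeline package was written to disk.
import os
print(pipeline_filename, os.path.getsize(pipeline_filename), 'bytes')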
In [ ]:
# Get or create an experiment
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
Submit the compiled pipeline for execution:
In [ ]:
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
# Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
# The generated link below leads to the pipeline run information page.
You just created and deployed your first pipeline in Kubeflow! You can put more complex Python code within the component functions, and you can import any libraries that are included in the base image (you can use VersionedDependencies to import libraries that are not included in the base image).
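As a hedged sketch of pulling in an extra dependency, newer versions of the kfp SDK let func_to_container_op pip-install packages into a component at runtime via packages_to_install (the numpy dependency and the python:3.7 image below are illustrative assumptions; check that your SDK version supports this parameter):
In [ ]:
# Sketch: install an extra library (numpy, as an example) into a lightweight component.
# Assumes your kfp SDK version supports the packages_to_install parameter.
def add_with_numpy(a: float, b: float) -> float:
    '''Adds two numbers using numpy, which may not be in every base image.'''
    import numpy as np
    return float(np.add(a, b))

add_with_numpy_op = components.func_to_container_op(
    add_with_numpy,
    base_image='python:3.7',
    packages_to_install=['numpy'],
)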
Copyright 2019 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.